
    On the Capacity Region of the Deterministic Y-Channel with Common and Private Messages

    In multi-user Gaussian relay networks, it is desirable to transmit private information to each user as well as common information to all of them. However, the capacity region of such networks with both kinds of information is not easy to characterize. Prior work used simple linear deterministic models to approximate the capacities of these Gaussian networks. This paper discusses the capacity region of the deterministic Y-channel with private and common messages. In this channel, each user aims to deliver two private messages to the other two users, in addition to a common message directed towards both of them. As there is no direct link between the users, all messages must pass through an intermediate relay. We present outer bounds on the rate region using genie-aided and cut-set bounds. Then, we develop a greedy scheme to define an achievable region and show that, for a certain number of levels at the relay, our achievable region coincides with the upper bound. Finally, we argue that these bounds are not sufficient to characterize the capacity region of this setup.

    Comment: 4 figures, 7 pages
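
    The outer bounds mentioned above combine cut-set and genie-aided arguments. As a hedged sketch only, with notation assumed here rather than taken from the paper (R_{ij} for the private rate from user i to user j, R_i^c for user i's common rate, and n_i for the number of levels on user i's relay link), the cut separating user i from the rest of the network yields a bound of the form:

        % Cut-set bound sketch for the uplink cut at user i (assumed notation).
        \[
          \sum_{j \neq i} R_{ij} + R_i^{c} \;\le\; n_i,
          \qquad i \in \{1, 2, 3\}.
        \]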

    SENTIMENT ANALYSIS IN SOCIAL NETWORKS USING NAÏVE BAYES ALGORITHM

    This report is concerned with opinion mining and sentiment analysis in social networks, especially Twitter. It aims to give a brief insight into ongoing research on sentiment analysis algorithms and techniques, and into the graphical representation of the statistical results obtained by applying sentiment analysis to social networks. The objectives of this project are to perform detailed research on the latest techniques in the field and to enhance current approaches to sentiment analysis by building a tool that provides statistical information, represented graphically to an acceptable degree of accuracy, to show the collective consciousness of Internet users. The implementation of this project will most probably use third-party tools available on the web to reduce the time needed and to allow rapid prototyping in the initial stages of implementation.
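
    As a brief, hedged illustration of the core technique named in the title (a sketch only; the toy tweets, labels, and use of scikit-learn are assumptions, not the report's actual tool chain), a multinomial Naïve Bayes sentiment classifier can be put together as follows:

        # A minimal Naive Bayes sentiment classifier, assuming scikit-learn is available.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Toy training data (invented for illustration).
        tweets = [
            "I love this phone, amazing battery life",
            "worst service ever, totally disappointed",
            "such a great day, feeling happy",
            "this update broke everything, very annoying",
        ]
        labels = ["positive", "negative", "positive", "negative"]

        # Bag-of-words features feeding a multinomial Naive Bayes model.
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(tweets, labels)

        print(model.predict(["what a great update"]))  # -> ['positive']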

    ComQA: A Community-sourced Dataset for Complex Factoid Question Answering with Paraphrase Clusters

    To bridge the gap between the capabilities of the state-of-the-art in factoid question answering (QA) and what users ask, we need large datasets of real user questions that capture the various question phenomena users are interested in, and the diverse ways in which these questions are formulated. We introduce ComQA, a large dataset of real user questions that exhibit different challenging aspects such as compositionality, temporal reasoning, and comparisons. ComQA questions come from the WikiAnswers community QA platform, which typically contains questions that are not satisfactorily answerable by existing search engine technology. Through a large crowdsourcing effort, we clean the question dataset, group questions into paraphrase clusters, and annotate clusters with their answers. ComQA contains 11,214 questions grouped into 4,834 paraphrase clusters. We detail the process of constructing ComQA, including the measures taken to ensure its high quality while making effective use of crowdsourcing. We also present an extensive analysis of the dataset and the results achieved by state-of-the-art systems on ComQA, demonstrating that our dataset can be a driver of future research on QA.

    Comment: 11 pages, NAACL 201
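
    A minimal sketch of how the released clusters might be consumed, assuming a JSON dump in which each record carries its paraphrased questions and gold answers (the file name and field names are assumptions; the dataset release defines the actual schema):

        # Iterating paraphrase clusters in a ComQA-style JSON dump.
        # The file name and the "questions"/"answers" field names are assumptions
        # for illustration; consult the dataset release for the actual schema.
        import json

        with open("comqa_train.json", encoding="utf-8") as f:
            clusters = json.load(f)

        for cluster in clusters[:3]:
            paraphrases = cluster["questions"]   # all paraphrases of one question
            answers = cluster["answers"]         # gold answers for the cluster
            print(len(paraphrases), "paraphrases ->", answers)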

    Novel sampling techniques for reservoir history matching optimisation and uncertainty quantification in flow prediction

    Modern reservoir management has an increasing focus on accurately predicting the likely range of field recoveries. A variety of assisted history matching techniques has been developed across the research community concerned with this topic. These techniques are based on obtaining multiple models that closely reproduce the historical flow behaviour of a reservoir. The resulting set of history-matched models is then used to quantify uncertainty in predicting the future performance of the reservoir and to provide economic evaluations for different field development strategies. The key step in this workflow is to employ algorithms that sample the parameter space in an efficient but appropriate manner. The choice of algorithm affects how fast a model is obtained and how well the model fits the production data. The sampling techniques developed to date include, among others, gradient-based methods, evolutionary algorithms, and the ensemble Kalman filter (EnKF).

    This thesis has investigated and further developed the following sampling and inference techniques: Particle Swarm Optimisation (PSO), Hamiltonian Monte Carlo, and Population Markov Chain Monte Carlo. The inspected techniques are capable of navigating the parameter space and producing history-matched models that can be used to quantify the uncertainty in the forecasts in a faster and more reliable way. The analysis of these techniques, compared with the Neighbourhood Algorithm (NA), has shown how the different techniques affect the predicted recovery from petroleum systems, and the benefits of the developed methods over the NA.

    The history matching problem is multi-objective in nature, with the production data possibly consisting of multiple types, coming from different wells, and collected at different times. Multiple objectives can be constructed from these data and explicitly optimised in a multi-objective scheme. The thesis has extended PSO to handle multi-objective history matching problems in which a number of possibly conflicting objectives must be satisfied simultaneously. The benefits and efficiency of the innovative multi-objective particle swarm optimisation scheme (MOPSO) are demonstrated on synthetic reservoirs. It is demonstrated that the MOPSO procedure can provide a substantial improvement in finding a diverse set of well-fitting models with fewer very costly forward simulation runs than the standard single-objective case, depending on how the objectives are constructed.

    The thesis has also shown how to tackle a large number of unknown parameters by coupling high-performance global optimisation algorithms, such as PSO, with model reduction techniques such as kernel principal component analysis (PCA) for parameterising spatially correlated random fields. The results of the PSO-PCA coupling applied to a recent SPE benchmark history matching problem demonstrate that the approach is indeed applicable to practical problems. A comparison of PSO with the EnKF data assimilation method has been carried out and has concluded that both methods obtain comparable results on the example case. This point reinforces the need for using a range of assisted history matching algorithms for more confidence in predictions.
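
    As a hedged sketch of the basic sampling loop behind PSO (not the thesis's implementation; the misfit function, bounds, and coefficients below are placeholders), a minimal particle swarm optimiser for a generic history-match misfit looks like this:

        # Minimal particle swarm optimisation (PSO) loop for a generic misfit.
        # The misfit function, bounds, and PSO coefficients are illustrative only.
        import numpy as np

        def misfit(x):
            # Placeholder for the real objective: the mismatch between simulated
            # and observed production data for a model with parameters x.
            return np.sum((x - 0.3) ** 2)

        rng = np.random.default_rng(0)
        n_particles, n_dims, n_iters = 20, 5, 100
        w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

        pos = rng.uniform(0.0, 1.0, (n_particles, n_dims))  # parameter space [0, 1]^d
        vel = np.zeros_like(pos)
        pbest = pos.copy()                                  # per-particle best
        pbest_val = np.array([misfit(p) for p in pos])
        gbest = pbest[np.argmin(pbest_val)]                 # swarm best

        for _ in range(n_iters):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, 0.0, 1.0)
            vals = np.array([misfit(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)]

        print("best parameters:", gbest, "misfit:", pbest_val.min())

    Each particle is a candidate reservoir model; in a real workflow the misfit call would wrap a costly forward simulation, which is why schemes that find good models in fewer runs matter.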

    Properties of an anti vague filter in BL-algebras

    In this paper, we introduce the notion of an anti-vague filter of a BL-algebra, with illustrations, and obtain some related properties. Further, we investigate some equivalent conditions for anti-vague filters.
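
    For background (a standard definition assumed as the setting for vague and anti-vague filters, not quoted from the paper): a vague set assigns each element a truth grade t_A and a false grade f_A whose sum may not exceed one, so membership lies in an interval:

        % Vague set in the sense of Gau and Buehrer (background, assumed setting).
        \[
          A = \{\, \langle x,\; t_A(x),\; 1 - f_A(x) \rangle : x \in X \,\},
          \qquad t_A(x) + f_A(x) \le 1,
        \]
        % so the membership grade of x lies in the interval [t_A(x), 1 - f_A(x)].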